Search Results: "esr"

20 June 2016

Keerthana Krishnan: 10 Git commands every developer should know

Git is a version control system which is very common among developers. I'm not going to explain how git works, because there are plenty of great introductory tutorials on the internet for that. However, I would like to document the commands I frequently use.
1. First, we initialize a git repo. Take care to ensure that you initialise the repo in the folder where all the files you want to commit exist: git init
2. To see all the branches in the current repo along with the current branch: git branch. To create a new branch: git branch <branch_name>
3. To change branches: git checkout <branch_name>. If the branch doesn't exist, it can be created by using: git checkout -b <branch_name>
4. To see the status of the files, i.e. whether there are uncommitted changes, files left to stage, etc., use: git status
5. To stage the changes in the directory for the next commit, use: git add <filename>. To stage all files in a directory: git add .
6. To commit the staged changes in the directory, use: git commit -m "<commit_message>"
7. To create a new connection to a remote repo, especially to push existing code to a Github repo: git remote add <connection_name> <connection_url>
8. To push the code to the remote repo: git push -u <connection_name> <branch_name>
9. To get the contents of a repo: git clone <path>
10. To merge another branch into the current one (for example, a side branch into master): git merge <branch_name>
For a more detailed list of git commands for easy access, check out this nifty cheat sheet. Happy coding!
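As a quick worked example pulling these commands together, here is a hedged sketch of a first commit and push; the branch name, remote name and URL are made up for illustration:
$ git init                                  # create the repository in the current folder
$ git status                                # see what is untracked
$ git add .                                 # stage everything in the folder
$ git commit -m "Initial commit"
$ git remote add origin https://github.com/example/example-repo.git   # hypothetical remote
$ git push -u origin master
$ git checkout -b side-branch               # start a feature branch and commit work on it
$ git checkout master
$ git merge side-branch                     # bring those commits back into master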

14 June 2016

Simon McVittie: GTK versioning and distributions

Allison Lortie has provoked a lot of comment with her blog post on a new proposal for how GTK is versioned. Here's some more context from the discussion at the GTK hackfest that prompted that proposal: there's actually quite a close analogy in how new Debian versions are developed. The problem we're trying to address here is the two sides of a trade-off: Historically, GTK has aimed to keep compatible within a major version, where major versions are rather far apart (GTK 1 in 1998, GTK 2 in 2002, GTK 3 in 2011, GTK 4 somewhere in the future). Meanwhile, fixing bugs, improving performance and introducing new features sometimes results in major changes behind the scenes. In an ideal world, these behind-the-scenes changes would never break applications; however, the world isn't ideal. (The Debian analogy here is that as much as we aspire to having the upgrade from one stable release to the next not break anything at all, I don't think we've ever achieved that in practice - we still ask users to read the release notes, even though ideally that wouldn't be necessary.) In particular, the perceived cost of doing a proper ABI break (a fully parallel-installable GTK 4) means there's a strong temptation to make changes that don't actually remove or change C symbols, but are clearly an ABI break, in the sense that an application that previously worked and was considered correct no longer works. A prominent recent example is the theming changes in GTK 3.20: the ABI in terms of functions available didn't change, but what happens when you call those functions changed in an incompatible way. This makes GTK hard to rely on for applications outside the GNOME release cycle, which is a problem that needs to be fixed (without stopping development from continuing). The goal of the plan we discussed today is to decouple the latest branch of development, which moves fast and sometimes breaks API, from the API-stable branches, which only get bug fixes. This model should look quite familiar to Debian contributors, because it's a lot like the way we release Debian and Ubuntu. In Debian, at any given time we have a development branch (testing/unstable) - currently "stretch", the future Debian 9. We also have some stable branches, of which the most recent are Debian 8 "jessie" and Debian 7 "wheezy". Different users of Debian have different trade-offs that lead them to choose one or the other of these. Users who value stability and want to avoid unexpected changes, even at a cost in terms of features and fixes for non-critical bugs, choose to use a stable release, preferably the most recent; they only need to change what they run on top of Debian for OS API changes (for instance webapps, local scripts, or the way they interact with the GUI) approximately every 2 years, or perhaps less often than that with the Debian-LTS project supporting non-current stable releases. Meanwhile, users who value the latest versions and are willing to work with a "moving target" as a result choose to use testing/unstable. The GTK analogy here is really quite close. In the new versioning model, library users who value stability over new things would prefer to use a stable-branch, ideally the latest; library users who want the latest features, the latest bug-fixes and the latest new bugs would use the branch that's the current focus of development. In practice we expect that the latter would be mostly GNOME projects. 
There's been some discussion at the hackfest about how often we'd have a new stable-branch: the fastest rate that's been considered is a stable-branch every 2 years, similar to Ubuntu LTS and Debian, but there's no consensus yet on whether they will be that frequent in practice. How many stable versions of GTK would end up shipped in Debian depends on how rapidly projects move from "old-stable" to "new-stable" upstream, how much those projects' Debian maintainers are willing to patch them to move between branches, and how many versions the release team will tolerate. Once we reach a steady state, I'd hope that we might have 1 or 2 stable-branched versions active at a time, packaged as separate parallel-installable source packages (a lot like how we handle Qt). GTK 2 might well stay around as an additional active version just from historical inertia. The stable versions are intended to be fully parallel-installable, just like the situation with GTK 1.2, GTK 2 and GTK 3 or with the major versions of Qt. For the "current development" version, I'd anticipate that we'd probably only ship one source package, and do ABI transitions for one version active at a time, a lot like how we deal with libgnome-desktop and the evolution-data-server family of libraries. Those versions would have parallel-installable runtime libraries but non-parallel-installable development files, again similar to libgnome-desktop. At the risk of stretching the Debian/Ubuntu analogy too far, the intermediate "current development" GTK releases that would accompany a GNOME release are like Ubuntu's non-LTS suites: they're more up to date than the fully stable releases (Ubuntu LTS, which has a release schedule similar to Debian stable), but less stable and not supported for as long. Hopefully this plan can meet both of its goals: minimize breakage for applications, while not holding back the development of new APIs.
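To make the parallel-installation point concrete, here is a small, hedged sketch of what this already looks like for GTK 2 and GTK 3 on Debian today; the package and pkg-config module names are the current ones, and an application simply picks the branch it targets at build time:
# both development stacks are co-installable
$ apt-get install libgtk2.0-dev libgtk-3-dev
# an application selects the branch it targets via pkg-config
$ cc hello.c $(pkg-config --cflags --libs gtk+-3.0) -o hello-gtk3
$ cc hello.c $(pkg-config --cflags --libs gtk+-2.0) -o hello-gtk2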

19 May 2016

Antoine Beaupré: My free software activities, May 2016

Debian Long Term Support (LTS) This is my 6th month working on Debian LTS, started by Raphael Hertzog at Freexian. This is my largest month so far, for which I had requested 20 hours of work.

Xen work I spent the largest amount of time working on the Xen packages. We had to re-roll the patches because it turned out we originally just imported the package from Ubuntu as-is. This was a mistake because that package forked off the Debian packaging a while ago and included regressions in the packaging itself, not just security fixes. So I went ahead and rerolled the whole patchset and tested it on Koumbit's test server. Brian May then completed the upload, which included about 40 new patches, mostly from Ubuntu.

Frontdesk duties Next up were the frontdesk duties I had taken this week. This was mostly uneventful, although I had forgotten how to do some of the work and thus ended up doing extensive work on the contributors' documentation. This is especially important since new contributors joined the team! I also did a lot of Debian documentation work in my non-sponsored work below. The triage work involved chasing around missing DLAs, triaging away OpenJDK-6 (for which, let me remind you, security support has ended in LTS), and raising the question of Mediawiki maintenance.

Other LTS work I also did a bunch of smaller stuff. Of importance, I can note that I uploaded two advisories that were pending from April: NSS and phpMyAdmin. I also reviewed the patches for the ICU update, since I built the one for squeeze (but didn't have time to upload before squeeze hit end-of-life). I have tried to contribute to the NTP security support but that was way too confusing for me, and I have left it to the package maintainer, who seemed to be on top of things, even if that means complete chaos and confusion in the world of NTP. I somehow thought that situation had improved with the recent investments in ntpsec and ntimed, but unfortunately Debian has not switched to the ntpsec codebase, so it seems that the NTP efforts have diverged into three different projects instead of coalescing into a single, better codebase.

Future LTS work This is likely to be my last month of work on LTS until September. I will try to contribute a few hours in June, but July and August will be very busy for me outside of Debian, so it's unlikely that I will contribute much to the project during the summer. My backlog includes these packages, which might be of interest to other LTS contributors:
  • libxml2: no upstream fix, but needs fixing!
  • tiff{,3}: same mess
  • libgd2: maintainer contacted
  • samba regression: mailed bug #821811 to try to revive the effort
  • policykit-1: to be investigated
  • p7zip: same

Other free software work

Debian documentation I wrote a short but detailed guide to Debian package development, something I felt was missing from the existing corpus, which seems too focused on covering all alternatives. My guide is opinionated: I believe there is a right and wrong way of doing things, or at least, there are best practices, especially when just patching packages. I ended up retroactively publishing that as a blog post - now I can simply tag an item with blog and it shows up in the blog. (Of course, because of a mis-configuration on my side, I have suffered from long delays publishing to Debian planet, so all the post dates are off in the Planet RSS feed. This will hopefully be resolved around the time this post is published, but this allowed me to get more familiar with the Planet Venus software, as detailed in that other article.) Apart from the guide, I have also done extensive research to collate information that allowed me to create workflow graphs of the various Debian repositories, which I have published in the Debian Release section of the Debian wiki. The graph helps me understand how packages flow between different suites and who uploads what where. This emerged after I realized I didn't really understand how "proposed updates" worked. Since we are looking at implementing a similar process for the security queue, I figured it was useful to show, graphically, what changes would happen. I have also published a graph that describes the relations between the different pieces of software that make up the Debian archive. The idea behind this is also to provide an overview of what happens when you upload a package to the Debian archive, but it is more aimed at Debian developers trying to figure out why things are not working as expected. The graphs were done with Graphviz, which allowed me to link to various components in the graph easily, which is neat. I also preferred Graphviz over Dia or other tools because it is easier to version and I don't have to bother (too much) about the layout and tweaking the looks. The downside is, of course, that when Graphviz makes the wrong decision, it's actually pretty hard to make it do the right thing, but there are various workarounds that I have found that made the graphs look pretty good. The source is of course available in git, but I feel all this documentation (including the guide) should go in a more official document somewhere. I couldn't quite figure out where. Advice on this would of course be welcome.
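For readers who have not used Graphviz this way, here is a minimal, hypothetical sketch of the kind of source behind such a graph; the suite names and URLs are just placeholders. Because it is plain text it versions nicely in git, and the URL attribute is what makes nodes clickable in the SVG output:
$ cat > flow.dot <<'EOF'
digraph flow {
    rankdir=LR;
    unstable [URL="https://www.debian.org/releases/sid/"];
    testing  [URL="https://www.debian.org/releases/testing/"];
    stable   [URL="https://www.debian.org/releases/stable/"];
    unstable -> testing [label="migration"];
    testing  -> stable  [label="release"];
}
EOF
$ dot -Tsvg flow.dot -o flow.svg    # Graphviz computes the layout for you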

Ikiwiki I have made yet another plugin for Ikiwiki, called irker, which enables wikis to send notifications to IRC channels, thanks to the simple irker bot. I had trouble with Irker in the past, since it was not quite reliable: it would disappear from channels and not return when we'd send it a notification. Unfortunately, the alternative, the KGB bot is much heavier: each repository needs a server-side, centralized configuration to operate properly. Irker's design is simpler and more adapted to a simple plugin like this. Let's hope it will work reliably enough for my needs. I have also suggested improvements to the footnotes styles, since they looked like hell in my Debian guide. It turns out this was an issue with the multimarkdown plugin that doesn't use proper semantic markup to identify footnotes. The proper fix is to enable footnotes in the default Discount plugin, which will require another, separate patch. Finally, I have done some improvements (I hope!) on the layout of this theme. I made the top header much lighter and transparent to work around an issue where followed anchors would be hidden under the top header. I have also removed the top menu made out of the sidebar plugin because it was cluttering the display too much. Those links are all on the frontpage anyways and I suspect people were not using them so much. The code is, as before, available in this git repository although you may want to start from the new ikistrap theme that is based on Bootstrap 4 and that may eventually be merged in ikiwiki directly.
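As a rough idea of what using the irker plugin mentioned above involves on the wiki side, assuming a standard ikiwiki setup file and an irkerd instance already running on the host; the paths are illustrative, and the plugin's channel settings live in the setup file as documented with the plugin itself:
$ irkerd &                                               # the simple relay bot the plugin talks to
$ ikiwiki --setup ~/ikiwiki.setup --plugin irker --rebuild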

DNS diagnostics Through this interesting overview of various *ping tools, I found out about the dnsdiag tool, which currently allows users to do DNS traces, tampering detection and ping over DNS. In the hope of packaging it for Debian, I have requested clarifications regarding a modification to the DNSpython library the tool uses. But I went even further and boldly opened a discussion about replacing DNSstuff, the venerable DNS diagnostic tool that is now commercial. It is somewhat surprising that no software that does those sanity checks for DNS has ever been publicly released, given how old DNS is. Incidentally, I have also requested that smtpping be packaged in Debian as well; httping is already packaged.
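For a feel of what the dnsdiag suite provides, here is a rough sketch of the two kinds of checks mentioned above, run with their defaults; the domain is a placeholder and the full option set is in each tool's --help:
$ dnsping example.org         # ping-style latency measurement of repeated DNS queries
$ dnstraceroute example.org   # trace the path taken to resolve the name, useful to spot tampering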

Link checking In the process of writing this article, I suddenly remembered that I constantly make mistakes in the various links I post on my site. So I started looking at a link checker, another tool that should be well established but that, surprisingly, is not quite there yet. I have found this neat software written in Python called LinkChecker. Unfortunately, it is basically broken in Debian, so I had to do a non-maintainer upload to fix that old bug. I managed to force myself to not take over maintainership of this orphaned package but I may end up doing just that if no one steps up the next time I find issues in the package. One of the problems I had checking links in my blog is that I constantly refer to sites that are hostile to bots, like the Debian bugtracker and MoinMoin wikis. So I published a patch that adds a --no-robots flag to be able to crawl those sites effectively. I know there is the W3C tool but it's written in Perl, and there's probably zero chance for me to convince those guys to bypass robots exclusion rules, so I am sticking to Linkchecker.
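Usage is straightforward, and with the patch described above applied, the extra flag is all that is needed to crawl bot-hostile sites; the URL is a placeholder:
$ linkchecker https://example.org/blog/               # respects robots.txt, as released
$ linkchecker --no-robots https://example.org/blog/   # with the proposed --no-robots patch applied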

Other Debian packaging work At my request, Drush has finally been removed from Debian. Hopefully someone else will pick up that work, but since it basically needs to be redone from scratch, there was no sense in keeping it in the next release of Debian. Similarly, Semanticscuttle was removed from Debian as well. I have uploaded new versions of tuptime, sopel and smokeping. I have also filed a Request For Help for Smokeping. I am happy to report there was a quick response and people will be stepping up to help with the maintenance of that venerable monitoring software.

Background radiation Finally, here's the generic background noise of me running around like a chicken with its head cut off. I should also mention that I will be less active in the coming months, as I will be heading outside now that the summer has finally come! I feel somewhat uncomfortable documenting my summer publicly here, as I am more protective of my privacy than I was before on this blog. But we'll see how it goes; maybe you'll see non-technical articles here again soon!

8 May 2016

Satyam Zode: Google Summer of Code 2016 With Debian Reproducible Builds : Introduction

This is the first blog post in a series of posts which I will be writing throughout the summer about my Google Summer of Code 2016 experience with Debian Reproducible Builds. Introduction: I am Satyam Zode (Satyam_z on IRC), a final year Computer Science student. I live in Pune, India (GMT +5:30). I am pursuing my undergraduate degree in Computer Engineering from Pune Institute of Computer Technology, Pune. I have been programming for the past 4 years. I am well versed in C/C++, Python3, and Golang. My Alioth and Github handles are satyamz-guest and satyamz respectively. I have been using GNU/Linux and free software for the last four years. I am an open source enthusiast and I have been following hacker culture for the past three years. Accepted into Google Summer of Code 2016 under the Debian Project: I am glad that I have got an opportunity to contribute to the Debian Project via Google Summer of Code 2016. I was accepted for the project Improving diffoscope tool and reproducibility of Debian packages. This summer and beyond I will be working with the Debian Reproducible Builds team to improve the debugging tool called Diffoscope (previously known as debbindiff). Thanks a bunch to the Debian community, Lunar, Holger Levsen, Reiner Herrmann, Mattia Rizzolo and the reproducible-builds folks for giving me this opportunity. Here is my GSoC'16 Proposal. And yay! It really feels great :smile: Project details: I will be working on the Diffoscope tool, which is a debugging tool developed as part of the reproducible-builds effort. Basically, Diffoscope compares two files and shows the difference in text and HTML format. Diffoscope is mainly developed to compare two Debian packages, which may consist of binary files, tar files, text files etc. Diffoscope helps to identify differences between two Debian packages with respect to timestamps, file ordering etc. Diffoscope will try to get to the bottom of what makes files or directories different. It will recursively unpack archives of many kinds and transform various binary formats into a more human-readable form to compare them. It can compare two tarballs, ISO images, Debian packages or PDFs just as easily. Diffoscope helps to assess the reproducibility of Debian packages. During this summer I will be improving Diffoscope. I will be mainly working on: My next blog post will be regarding community bonding. Thanks for reading :)
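For readers who have not tried it, here is a minimal sketch of how Diffoscope is typically run on two builds of the same Debian package; the file names are invented, and the second form writes the HTML report mentioned above:
$ diffoscope package_1.0-1_amd64.deb package_1.0-1.rebuild_amd64.deb
$ diffoscope --html report.html package_1.0-1_amd64.deb package_1.0-1.rebuild_amd64.deb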

21 March 2016

Lunar: Reproducible builds: week 47 in Stretch cycle

What happened in the reproducible builds effort between March 13th and March 19th 2016:

Toolchain fixes
  • Petter Reinholdtsen uploaded naturaldocs/1.51-1.1 which makes the output reproducible. Original patch by Chris Lamb.
  • Damyan Ivanov uploaded libpdf-api2-perl/2.025-2 which will make internal font ID reproducible.
  • Christian Hofstaedtler uploaded ruby2.3/2.3.0-5 which sets gzip embedded mtime field to fixed value for rdoc-generated compressed javascript data.

Packages fixed The following packages have become reproducible due to changes in their build dependencies: diction, doublecmd, ruby-hiredis, vdr-plugin-epgsearch. The following packages became reproducible after getting fixed: Some uploads fixed some reproducibility issues, but not all of them: Patches submitted which have not made their way to the archive yet:
  • #818128 on nethack by Reiner Herrmann: implement support for SOURCE_DATE_EPOCH, set LC_ALL to C, and ensure deterministic build order when running parallel builds.
  • #818111 on debian-keyring by Satyam Zode: fix the order of files in md5sums.
  • #818067 on ncurses by Niels Thykier: strip trailing whitespaces introduced when using dash as system shell.
  • #818230 on aircrack-ng by Reiner Herrmann: build assembly code as a separate .o file.
  • #818419 on mutt by Daniel Shahaf: use C locale when listing files to be put in README.Patches.
  • #818430 on ruby-coveralls by Dhole: ensure UTC is used as the timezone when generating the documentation.
  • #818686 on littlewizard by Reiner Herrmann: use the C locale in the script for iterating over the files.
  • #818704 on strigi by Reiner Herrmann: sort keys when traversing hashes in makecode.pl.
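Several of the patches above follow the same recurring recipes: take timestamps from a stable external source such as SOURCE_DATE_EPOCH, avoid embedding the build time in compressed files, and force the C locale so that sort order does not depend on the build environment. A hedged sketch of what that can look like in a package's build rules (file names invented for the example):
# derive a reproducible timestamp from the latest debian/changelog entry
# (needs a recent dpkg; otherwise derive it from the Date field)
SOURCE_DATE_EPOCH=$(dpkg-parsechangelog -STimestamp)
export SOURCE_DATE_EPOCH
# gzip -n omits the original name and timestamp from the compressed header
gzip -9n docs/manual.txt
# list files in a locale-independent, deterministic order
find src -name '*.c' | LC_ALL=C sort > files.list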

Package reviews 44 reviews have been removed, 40 added and 5 updated in the previous week. Chris Lamb has reported 16 FTBFS.

10 March 2016

Mike Hommey: RIP Iceweasel, 13 Nov 2006 - 10 Mar 2016

This took longer than it should have, but a page is now officially turned. I uploaded Firefox and Firefox ESR to Debian unstable. They will have to go through the Debian NEW queue because they are new source packages, so won't be immediately available, but they should arrive soon enough. People using Iceweasel from Debian unstable will be upgraded to Firefox ESR. Debian stable will receive Firefox ESR after Iceweasel/Firefox ESR38 is end-of-lifed, in about 3 months. Thanks go to Sylvestre Ledru, Mike Connor (the same who filed bug 354622) and Stefano Zacchiroli.

1 February 2016

Ritesh Raj Sarraf: State of Email Clients on Linux Based Platforms

I've been trying to catch up on my reading list (thanks to rss2email, I can now hold on to the list longer rather than just marking everything as read). And one item from the end of last year worth spending time on was Thunderbird. Thunderbird has been the email client of choice for many users. The main reason for its popularity has been, in my opinion, that it is cross-platform, because that allows users an easy migration path across platforms. It also brings persistence, in terms of features and workflows, to end users. Perhaps that has been an important reason for many distributions (Ubuntu) and service providers to promote it as the default email client. A Windows/Mac user migrating to Ubuntu will have a much better experience if they see familiar tools, and their data and workflows remaining intact. Mozilla must have executed its plan pretty well to have been able to get it rolling so far, because other attempts elsewhere (KDE4 on Windows) weren't so easy. Part of the reason may be that any time a disruptive new update is rolled out (KDE4, GNOME3), a lot of frustrated users are born. It is not that people don't want change. It's just that no one likes to see things break. But unfortunately, in the Free Software / Open Source world, that is taken lightly. That's one reason why it takes Mozilla so, so, so long to implement Maildir in TB, when others (Evolution) have had it for so long. So, recently, Mozilla announced its plans to drop Thunderbird development. It is not something new. Anyone using TB knows how long it has been in Maintenance/ESR mode. What was interesting on LWN were the comments. People talked a lot about DE-native email clients (KMail, Sylpheed), TUI clients and, these days, browser-based clients. Surprisingly, not much was said about Evolution. My recent move to GNOME has made me look into letting go of old tools/workflows and trying to embrace newer ones. One of them has been GNOME itself. Changing workflows for email was difficult and frustrating. But knowing that TB doesn't have a bright future, it was important to look for alternatives. Just having waited for Maildir and a GTK3 port of TB for so long was enough. On GNOME, Evolution may give an initial impression of being in maintenance mode, especially given that most GNOME apps are now moving to the new UI, which is more touch-friendly, and also because there were other efforts to have another email client on GNOME (Yorba's, I think). But even in its current form, Evolution is a pretty impressive email client and Personal Information Management tool. It is already ported to GTK3, which implies it is capable of responding to touch events. It sure could have a revised touch UI, like what is happening with other GNOME apps, but I'm happy that this has been deferred for now. Revising Evolution won't be an easy task, and knowing that GNOME too is understaffed, breaking a perfectly working tool won't be a good idea. My intent with this blog post is to give credit to my favorite GNOME application, i.e. Evolution. So next time you are looking for an email client alternative, give Evolution a try. Today, it already does:
  • Touch UI
  • Maildir
  • Microsoft Exchange
  • GTK3
  • Addressbook, Notes, Tasks, Calendar - Most Standards Based and Google Services compatible
  • RSS Feed Manager
  • And many more that I may not have been using
The only missing piece is being cross-platform. But given the trend, and available resources, I think that path is not worthy of trying. Keep It Simple. Support one platform and support it well.

2 December 2015

Norbert Preining: SJW attitude in science

Recently, Eric Raymond, famous for his The Cathedral and the Bazaar, stepped forward to speak out against mixing a social agenda, like equal treatment for everyone outside the white straight group, with meritocracy, the evaluation of someone solely based on his/her contribution. And without fail, the SJW side of the Internet didn't take much time to munch down on Raymond like hungry wolves in a long winter: Coraline Ada Ehmke (should that recall Ada Lovelace?), Tim Chevalier, Matthew Garrett, just to name a few. The arguments are quite easy to summarize: the meritocracy party proposes that "one's contribution should only be evaluated based on the content and the quality", while the SJW party asserts that, in case the submitter is from a minority group, in particular everyone outside the white straight group, the contribution has to be accepted with higher probability (or without discussion) to ensure equality. (Added here for clarification: an SJW is someone who puts the agenda of anti-genderization and anti-biasization (nice word) above all other objectives, often by quoting scientific results on existing and undeniable bias.) Well, I am a scientist, and I can tell you just one thing: I simply don't give a shit whether someone is white, black, red, green, straight, gay, a Rastafari or Pastafari (well, to be honest, I really give an extra 3 plus points to Pastafari!); really, I check their proofs. And if they are rubbish, they are rubbish. If they are ok, they are ok. Let us for a moment assume that the world of research would work the same way as the proposed world of Ehmke, Chevalier, Garrett, and all the other SJWs: a black lesbian woman submits an article to a scientific journal (first of all, as a referee I wouldn't even know about her sexual orientation, nor her color, nor her background in most cases), and the honest referee reports would dare to reject the paper due to technical and methodological insufficiencies. A very common case. Now, the average SJW (including the above named, according to their blog posts) would require us to be open and, well, publish a rubbish paper just because it is written by a non-white, non-male author. What should I say? Well, stupidity seemingly does not have a limit. Hopefully software projects around the world do not fall into this stupid trap, and continue evaluating contributions solely on their actual merit. This is all I am asking for, quite in contrast to the SJW groupies. And this is also what Raymond is asking for, so I have not the slightest idea why anyone around this planet sees a need to step up and become noisy.

29 November 2015

Matthew Garrett: What is hacker culture?

Eric Raymond, author of The Cathedral and the Bazaar (an important work describing the effectiveness of open collaboration and development), recently wrote a piece calling for "Social Justice Warriors" to be ejected from the hacker community. The primary thrust of his argument is that by calling for a removal of the "cult of meritocracy", these SJWs are attacking the central aspect of hacker culture - that the quality of code is all that matters.

This argument is simply wrong.

Eric's been involved in software development for a long time. In that time he's seen a number of significant changes. We've gone from computers being the playthings of the privileged few to being nearly ubiquitous. We've moved from the internet being something you found in universities to something you carry around in your pocket. You can now own a computer whose CPU executes only free software from the moment you press the power button. And, as Eric wrote almost 20 years ago, we've identified that the "Bazaar" model of open collaborative development works better than the "Cathedral" model of closed centralised development.

These are huge shifts in how computers are used, how available they are, how important they are in people's lives, and, as a consequence, how we develop software. It's not a surprise that the rise of Linux and the victory of the bazaar model coincided with internet access becoming more widely available. As the potential pool of developers grew larger, development methods had to be altered. It was no longer possible to insist that somebody spend a significant period of time winning the trust of the core developers before being permitted to give feedback on code. Communities had to change in order to accept these offers of work, and the communities were better for that change.

The increasing ubiquity of computing has had another outcome. People are much more aware of the role of computing in their lives. They are more likely to understand how proprietary software can restrict them, how not having the freedom to share software can impair people's lives, how not being able to involve themselves in software development means software doesn't meet their needs. The largest triumph of free software has not been amongst people from a traditional software development background - it's been the fact that we've grown our communities to include people from a huge number of different walks of life. Free software has helped bring computing to under-served populations all over the world. It's aided circumvention of censorship. It's inspired people who would never have considered software development as something they could be involved in to develop entire careers in the field. We will not win because we are better developers. We will win because our software meets the needs of many more people, needs the proprietary software industry either can not or will not satisfy. We will win because our software is shaped not only by people who have a university degree and a six figure salary in San Francisco, but because our contributors include people whose native language is spoken by so few people that proprietary operating system vendors won't support it, people who live in a heavily censored regime and rely on free software for free communication, people who rely on free software because they can't otherwise afford the tools they would need to participate in development.

In other words, we will win because free software is accessible to more of society than proprietary software. And for that to be true, it must be possible for our communities to be accessible to anybody who can contribute, regardless of their background.

Up until this point, I don't think I've made any controversial claims. In fact, I suspect that Eric would agree. He would argue that because hacker culture defines itself through the quality of contributions, the background of the contributor is irrelevant. On the internet, nobody knows that you're contributing from a basement in an active warzone, or from a refuge shelter after escaping an abusive relationship, or with the aid of assistive technology. If you can write the code, you can participate.

Of course, this kind of viewpoint is overly naive. Humans are wonderful at noticing indications of "otherness". Eric even wrote about his struggle to stop having a viscerally negative reaction to people of a particular race. This happened within the past few years, so before then we can assume that he was less aware of the issue. If Eric received a patch from someone whose name indicated membership of this group, would there have been part of his subconscious that reacted negatively? Would he have rationalised this into a more critical analysis of the patch, increasing the probability of rejection? We don't know, and it's unlikely that Eric does either.

Hacker culture has long been concerned with good design, and a core concept of good design is that code should fail safe - ie, if something unexpected happens or an assumption turns out to be untrue, the desirable outcome is the one that does least harm. A command that fails to receive a filename as an argument shouldn't assume that it should modify all files. A network transfer that fails a checksum shouldn't be permitted to overwrite the existing data. An authentication server that receives an unexpected error shouldn't default to granting access. And a development process that may be subject to unconscious bias should have processes in place that make it less likely that said bias will result in the rejection of useful contributions.

When people criticise meritocracy, they're not criticising the concept of treating contributions based on their merit. They're criticising the idea that humans are sufficiently self-aware that they will be able to identify and reject every subconscious prejudice that will affect their treatment of others. It's not a criticism of a desirable goal, it's a criticism of a flawed implementation. There's evidence that organisations that claim to embody meritocratic principles are more likely to reward men than women even when everything else is equal. The "cult of meritocracy" isn't the belief that meritocracy is a good thing, it's the belief that a project founded on meritocracy will automatically be free of bias.

Projects like the Contributor Covenant that Eric finds so objectionable exist to help create processes that (at least partially) compensate for our flaws. Review of our processes to determine whether we're making poor social decisions is just as important as review of our code to determine whether we're making poor technical decisions. Just as the bazaar overtook the cathedral by making it easier for developers to be involved, inclusive communities will overtake "pure meritocracies" because, in the long run, these communities will produce better output - not just in terms of the quality of the code, but also in terms of the ability of the project to meet the needs of a wider range of people.

The fight between the cathedral and the bazaar came from people who were outside the cathedral. Those fighting against the assumption that meritocracies work may be outside what Eric considers to be hacker culture, but they're already part of our communities, already making contributions to our projects, already bringing free software to more people than ever before. This time it's Eric building a cathedral and decrying the decadent hordes in their bazaar, Eric who's failed to notice the shift in the culture that surrounds him. And, like those who continued building their cathedrals in the 90s, it's Eric who's now irrelevant to hacker culture.

(Edited to add: for two quite different perspectives on why Eric's wrong, see Tim's and Coraline's posts)

3 November 2015

Joey Hess: STM Region contents

concurrent-output, released yesterday, got a lot of fun features. It now does full curses-style minimization of the output, to redraw updated lines with optimal efficiency. And it supports multiline regions and wrapping of too-long lines. And it allows the user to embed ANSI colors in a region. Three features that are in some tension and were fun to implement all together. But I have a more interesting feature to blog about... I've added the ability for the content of a Region to be determined by an STM transaction. Here, for example, is a region that's a clock:
timeDisplay :: TVar UTCTime -> STM Text
timeDisplay tv = T.pack . show <$> readTVar tv
clockRegion :: IO ConsoleRegionHandle
clockRegion = do
    tv <- atomically . newTVar =<< getCurrentTime
    r <- openConsoleRegion Linear
    setConsoleRegion r (timeDisplay tv)
    async $ forever $ do
        threadDelay 1000000 -- 1 sec
        atomically . (writeTVar tv) =<< getCurrentTime
    return r
There's something magical about this. Whenever a new value is written into the TVar, concurrent-output automatically knows that this region needs to be updated. How does it know how to do that? Magic of STM. Basically, concurrent-output composes all the STM transactions of Regions, and asks STM to wait until there's something new to display. STM keeps track of whatever TVars might be looked at, and so can put the display thread to sleep until there's a change to display. Using STM I've gotten extensibility for free, due to the nice ways that STM transactions compose. A few other obvious things to do with this: Compose 2 regions with padding so they display on the same line, left and right aligned. Trim a region's content to the display width. (Handily exported by concurrent-output in a TVar for this kind of thing.)
I'm tempted to write a console spreadsheet using this. Each visible cell of the spreadsheet would have its own region, that uses a STM transaction to display. Plain data Cells would just display their current value. Cells that contain a function would read the current values of other Cells, and use that to calculate what to display. Which means that a Cell containing a function would automatically update whenever any of the Cells that it depends on were updated! Do you think that a simple interactive spreadsheet built this way would be more than 100 lines of code?

20 August 2015

Simon Kainz: vim in Heidelberg

Following the tradition of Love Locks, apparently there is someone really in love with vim in Heidelberg! ("Valerie", found at the Old Bridge in Heidelberg during DebConf15.)

2 May 2015

Dimitri Fontaine: Quicklisp and debian

Common Lisp users are very happy to use Quicklisp when it comes to downloading and maintaining dependencies between their own code and the libraries it is using. Sometimes it is pointed out to me that, compared to other programming languages, Common Lisp is lacking a lot in the batteries-included area. After having had to package about 50 Common Lisp libraries for debian I can tell you that I politely disagree with that. And this post is about the tool and process I use to maintain all those libraries. Quicklisp is good at ensuring a proper distribution of all those libs it supports and actually tests that they all compile and load together, so I've been using it as my upstream for debian packaging purposes. Using Quicklisp here makes my life much simpler as I can grovel through its metadata and automate most of the maintenance of my cl related packages. It's all automated in the ql-to-deb software which, unsurprisingly, has been written in Common Lisp itself. It's kind of a Quicklisp client that will fetch Quicklisp's current list of releases with version numbers and compare it to the list of managed packages for debian in order to then build new versions automatically. The current workflow begins with using ql-to-deb to check for the work to be done today:
$ /vagrant/build/bin/ql-to-deb check
Fetching "http://beta.quicklisp.org/dist/quicklisp.txt"
Fetching "http://beta.quicklisp.org/dist/quicklisp/2015-04-07/releases.txt"
update: cl+ssl cl-csv cl-db3 drakma esrap graph hunchentoot local-time lparallel nibbles qmynd trivial-backtrace
upload: hunchentoot
After careful manual review of the automatic decision, let's just update all what check decided would have to be:
$ /vagrant/build/bin/ql-to-deb update
Fetching "http://beta.quicklisp.org/dist/quicklisp.txt"
Fetching "http://beta.quicklisp.org/dist/quicklisp/2015-04-07/releases.txt"
Updating package cl-plus-ssl from 20140826 to 20150302.
     see logs in "//tmp/ql-to-deb/logs//cl-plus-ssl.log"
Fetching "http://beta.quicklisp.org/archive/cl+ssl/2015-03-02/cl+ssl-20150302-git.tgz"
Checksum test passed.
     File: "/tmp/ql-to-deb/archives/cl+ssl-20150302-git.tgz"
      md5: 61d9d164d37ab5c91048827dfccd6835
Building package cl-plus-ssl
Updating package cl-csv from 20140826 to 20150302.
     see logs in "//tmp/ql-to-deb/logs//cl-csv.log"
Fetching "http://beta.quicklisp.org/archive/cl-csv/2015-03-02/cl-csv-20150302-git.tgz"
Checksum test passed.
     File: "/tmp/ql-to-deb/archives/cl-csv-20150302-git.tgz"
      md5: 32f6484a899fdc5b690f01c244cd9f55
Building package cl-csv
Updating package cl-db3 from 20131111 to 20150302.
     see logs in "//tmp/ql-to-deb/logs//cl-db3.log"
Fetching "http://beta.quicklisp.org/archive/cl-db3/2015-03-02/cl-db3-20150302-git.tgz"
Checksum test passed.
     File: "/tmp/ql-to-deb/archives/cl-db3-20150302-git.tgz"
      md5: 578896a3f60f474742f240b703f8c5f5
Building package cl-db3
Updating package cl-drakma from 1.3.11 to 1.3.13.
     see logs in "//tmp/ql-to-deb/logs//cl-drakma.log"
Fetching "http://beta.quicklisp.org/archive/drakma/2015-04-07/drakma-1.3.13.tgz"
Checksum test passed.
     File: "/tmp/ql-to-deb/archives/drakma-1.3.13.tgz"
      md5: 3b548bce10728c7a058f19444c8477c3
Building package cl-drakma
Updating package cl-esrap from 20150113 to 20150302.
     see logs in "//tmp/ql-to-deb/logs//cl-esrap.log"
Fetching "http://beta.quicklisp.org/archive/esrap/2015-03-02/esrap-20150302-git.tgz"
Checksum test passed.
     File: "/tmp/ql-to-deb/archives/esrap-20150302-git.tgz"
      md5: 8b198d26c27afcd1e9ce320820b0e569
Building package cl-esrap
Updating package cl-graph from 20141106 to 20150407.
     see logs in "//tmp/ql-to-deb/logs//cl-graph.log"
Fetching "http://beta.quicklisp.org/archive/graph/2015-04-07/graph-20150407-git.tgz"
Checksum test passed.
     File: "/tmp/ql-to-deb/archives/graph-20150407-git.tgz"
      md5: 3894ef9262c0912378aa3b6e8861de79
Building package cl-graph
Updating package hunchentoot from 1.2.29 to 1.2.31.
     see logs in "//tmp/ql-to-deb/logs//hunchentoot.log"
Fetching "http://beta.quicklisp.org/archive/hunchentoot/2015-04-07/hunchentoot-1.2.31.tgz"
Checksum test passed.
     File: "/tmp/ql-to-deb/archives/hunchentoot-1.2.31.tgz"
      md5: 973eccfef87e81f1922424cb19884d63
Building package hunchentoot
Updating package cl-local-time from 20150113 to 20150407.
     see logs in "//tmp/ql-to-deb/logs//cl-local-time.log"
Fetching "http://beta.quicklisp.org/archive/local-time/2015-04-07/local-time-20150407-git.tgz"
Checksum test passed.
     File: "/tmp/ql-to-deb/archives/local-time-20150407-git.tgz"
      md5: 7be4a31d692f5862014426a53eb1e48e
Building package cl-local-time
Updating package cl-lparallel from 20141106 to 20150302.
     see logs in "//tmp/ql-to-deb/logs//cl-lparallel.log"
Fetching "http://beta.quicklisp.org/archive/lparallel/2015-03-02/lparallel-20150302-git.tgz"
Checksum test passed.
     File: "/tmp/ql-to-deb/archives/lparallel-20150302-git.tgz"
      md5: dbda879d0e3abb02a09b326e14fa665d
Building package cl-lparallel
Updating package cl-nibbles from 20141106 to 20150407.
     see logs in "//tmp/ql-to-deb/logs//cl-nibbles.log"
Fetching "http://beta.quicklisp.org/archive/nibbles/2015-04-07/nibbles-20150407-git.tgz"
Checksum test passed.
     File: "/tmp/ql-to-deb/archives/nibbles-20150407-git.tgz"
      md5: 2ffb26241a1b3f49d48d28e7a61b1ab1
Building package cl-nibbles
Updating package cl-qmynd from 20141217 to 20150302.
     see logs in "//tmp/ql-to-deb/logs//cl-qmynd.log"
Fetching "http://beta.quicklisp.org/archive/qmynd/2015-03-02/qmynd-20150302-git.tgz"
Checksum test passed.
     File: "/tmp/ql-to-deb/archives/qmynd-20150302-git.tgz"
      md5: b1cc35f90b0daeb9ba507fd4e1518882
Building package cl-qmynd
Updating package cl-trivial-backtrace from 20120909 to 20150407.
     see logs in "//tmp/ql-to-deb/logs//cl-trivial-backtrace.log"
Fetching "http://beta.quicklisp.org/archive/trivial-backtrace/2015-04-07/trivial-backtrace-20150407-git.tgz"
Checksum test passed.
     File: "/tmp/ql-to-deb/archives/trivial-backtrace-20150407-git.tgz"
      md5: 762b0acf757dc8a2a6812d2f0f2614d9
Building package cl-trivial-backtrace
Quite simple. To be totally honest, I first had a problem with the parser generator library esrap, wherein the README documentation changed to become a README.org file, and I had to tell my debian packaging about that. See the 0ef669579cf7c07280eae7fe6f61f1bd664d337e commit to ql-to-deb for details. What about trying to install those packages locally? That's usually a very good test. Sometimes some dependencies are missing at the dpkg command line, so another apt-get install -f is needed:
$ /vagrant/build/bin/ql-to-deb install
sudo dpkg -i /tmp/ql-to-deb/cl-plus-ssl_20150302-1_all.deb /tmp/ql-to-deb/cl-csv_20150302-1_all.deb /tmp/ql-to-deb/cl-csv-clsql_20150302-1_all.deb /tmp/ql-to-deb/cl-csv-data-table_20150302-1_all.deb /tmp/ql-to-deb/cl-db3_20150302-1_all.deb /tmp/ql-to-deb/cl-drakma_1.3.13-1_all.deb /tmp/ql-to-deb/cl-esrap_20150302-1_all.deb /tmp/ql-to-deb/cl-graph_20150407-1_all.deb /tmp/ql-to-deb/cl-hunchentoot_1.2.31-1_all.deb /tmp/ql-to-deb/cl-local-time_20150407-1_all.deb /tmp/ql-to-deb/cl-lparallel_20150302-1_all.deb /tmp/ql-to-deb/cl-nibbles_20150407-1_all.deb /tmp/ql-to-deb/cl-qmynd_20150302-1_all.deb /tmp/ql-to-deb/cl-trivial-backtrace_20150407-1_all.deb
(Reading database ... 79689 files and directories currently installed.)
Preparing to unpack .../cl-plus-ssl_20150302-1_all.deb ...
Unpacking cl-plus-ssl (20150302-1) over (20140826-1) ...
Selecting previously unselected package cl-csv.
Preparing to unpack .../cl-csv_20150302-1_all.deb ...
Unpacking cl-csv (20150302-1) ...
Selecting previously unselected package cl-csv-clsql.
Preparing to unpack .../cl-csv-clsql_20150302-1_all.deb ...
Unpacking cl-csv-clsql (20150302-1) ...
Selecting previously unselected package cl-csv-data-table.
Preparing to unpack .../cl-csv-data-table_20150302-1_all.deb ...
Unpacking cl-csv-data-table (20150302-1) ...
Selecting previously unselected package cl-db3.
Preparing to unpack .../cl-db3_20150302-1_all.deb ...
Unpacking cl-db3 (20150302-1) ...
Preparing to unpack .../cl-drakma_1.3.13-1_all.deb ...
Unpacking cl-drakma (1.3.13-1) over (1.3.11-1) ...
Preparing to unpack .../cl-esrap_20150302-1_all.deb ...
Unpacking cl-esrap (20150302-1) over (20150113-1) ...
Preparing to unpack .../cl-graph_20150407-1_all.deb ...
Unpacking cl-graph (20150407-1) over (20141106-1) ...
Preparing to unpack .../cl-hunchentoot_1.2.31-1_all.deb ...
Unpacking cl-hunchentoot (1.2.31-1) over (1.2.29-1) ...
Preparing to unpack .../cl-local-time_20150407-1_all.deb ...
Unpacking cl-local-time (20150407-1) over (20150113-1) ...
Preparing to unpack .../cl-lparallel_20150302-1_all.deb ...
Unpacking cl-lparallel (20150302-1) over (20141106-1) ...
Preparing to unpack .../cl-nibbles_20150407-1_all.deb ...
Unpacking cl-nibbles (20150407-1) over (20141106-1) ...
Preparing to unpack .../cl-qmynd_20150302-1_all.deb ...
Unpacking cl-qmynd (20150302-1) over (20141217-1) ...
Preparing to unpack .../cl-trivial-backtrace_20150407-1_all.deb ...
Unpacking cl-trivial-backtrace (20150407-1) over (20120909-2) ...
Setting up cl-plus-ssl (20150302-1) ...
dpkg: dependency problems prevent configuration of cl-csv:
 cl-csv depends on cl-interpol; however:
  Package cl-interpol is not installed.
dpkg: error processing package cl-csv (--install):
 dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of cl-csv-clsql:
 cl-csv-clsql depends on cl-csv; however:
  Package cl-csv is not configured yet.
dpkg: error processing package cl-csv-clsql (--install):
 dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of cl-csv-data-table:
 cl-csv-data-table depends on cl-csv; however:
  Package cl-csv is not configured yet.
dpkg: error processing package cl-csv-data-table (--install):
 dependency problems - leaving unconfigured
Setting up cl-db3 (20150302-1) ...
Setting up cl-drakma (1.3.13-1) ...
Setting up cl-esrap (20150302-1) ...
Setting up cl-graph (20150407-1) ...
Setting up cl-local-time (20150407-1) ...
Setting up cl-lparallel (20150302-1) ...
Setting up cl-nibbles (20150407-1) ...
Setting up cl-qmynd (20150302-1) ...
Setting up cl-trivial-backtrace (20150407-1) ...
Setting up cl-hunchentoot (1.2.31-1) ...
Errors were encountered while processing:
 cl-csv
 cl-csv-clsql
 cl-csv-data-table
Let's make sure that our sid users will be happy with the update here:
$ sudo apt-get install -f
Reading package lists... Done
Building dependency tree       
Reading state information... Done
Correcting dependencies... Done
The following packages were automatically installed and are no longer required:
  g++-4.7 git git-man html2text libaugeas-ruby1.8 libbind9-80
  libclass-isa-perl libcurl3-gnutls libdns88 libdrm-nouveau1a
  libegl1-mesa-drivers libffi5 libgraphite3 libgssglue1 libisc84 libisccc80
  libisccfg82 liblcms1 liblwres80 libmpc2 libopenjpeg2 libopenvg1-mesa
  libpoppler19 librtmp0 libswitch-perl libtiff4 libwayland-egl1-mesa luatex
  openssh-blacklist openssh-blacklist-extra python-chardet python-debian
  python-magic python-pkg-resources python-six ttf-dejavu-core ttf-marvosym
Use 'apt-get autoremove' to remove them.
The following extra packages will be installed:
  cl-interpol
The following NEW packages will be installed:
  cl-interpol
0 upgraded, 1 newly installed, 0 to remove and 51 not upgraded.
3 not fully installed or removed.
Need to get 20.7 kB of archives.
After this operation, 135 kB of additional disk space will be used.
Do you want to continue? [Y/n] 
Get:1 http://ftp.fr.debian.org/debian/ sid/main cl-interpol all 0.2.1-2 [20.7 kB]
Fetched 20.7 kB in 0s (84.5 kB/s)
debconf: unable to initialize frontend: Dialog
debconf: (Dialog frontend will not work on a dumb terminal, an emacs shell buffer, or without a controlling terminal.)
debconf: falling back to frontend: Readline
Selecting previously unselected package cl-interpol.
(Reading database ... 79725 files and directories currently installed.)
Preparing to unpack .../cl-interpol_0.2.1-2_all.deb ...
Unpacking cl-interpol (0.2.1-2) ...
Setting up cl-interpol (0.2.1-2) ...
Setting up cl-csv (20150302-1) ...
Setting up cl-csv-clsql (20150302-1) ...
Setting up cl-csv-data-table (20150302-1) ...
All looks fine, time to sign those packages. There's a trick here: you want to be sure you're using a GnuPG setup that allows you to enter your passphrase only once; see the ql-to-deb vm setup for details, and the usual documentation about all that if you're interested in the details.
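The "enter your passphrase only once" part simply relies on gpg-agent caching it across the successive signfile calls shown below; a minimal sketch of the relevant settings, with arbitrary cache lifetimes chosen for the example:
$ cat >> ~/.gnupg/gpg-agent.conf <<'EOF'
default-cache-ttl 3600
max-cache-ttl 7200
EOF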
$ /vagrant/build/bin/ql-to-deb sign
 signfile /tmp/ql-to-deb/cl-plus-ssl_20150302-1.dsc 60B1CB4E
 signfile /tmp/ql-to-deb/cl-plus-ssl_20150302-1_amd64.changes 60B1CB4E
Successfully signed dsc and changes files
 signfile /tmp/ql-to-deb/cl-csv_20150302-1.dsc 60B1CB4E
 signfile /tmp/ql-to-deb/cl-csv_20150302-1_amd64.changes 60B1CB4E
Successfully signed dsc and changes files
 signfile /tmp/ql-to-deb/cl-db3_20150302-1.dsc 60B1CB4E
 signfile /tmp/ql-to-deb/cl-db3_20150302-1_amd64.changes 60B1CB4E
Successfully signed dsc and changes files
 signfile /tmp/ql-to-deb/cl-drakma_1.3.13-1.dsc 60B1CB4E
 signfile /tmp/ql-to-deb/cl-drakma_1.3.13-1_amd64.changes 60B1CB4E
Successfully signed dsc and changes files
 signfile /tmp/ql-to-deb/cl-esrap_20150302-1.dsc 60B1CB4E
 signfile /tmp/ql-to-deb/cl-esrap_20150302-1_amd64.changes 60B1CB4E
Successfully signed dsc and changes files
 signfile /tmp/ql-to-deb/cl-graph_20150407-1.dsc 60B1CB4E
 signfile /tmp/ql-to-deb/cl-graph_20150407-1_amd64.changes 60B1CB4E
Successfully signed dsc and changes files
 signfile /tmp/ql-to-deb/hunchentoot_1.2.31-1.dsc 60B1CB4E
 signfile /tmp/ql-to-deb/hunchentoot_1.2.31-1_amd64.changes 60B1CB4E
Successfully signed dsc and changes files
 signfile /tmp/ql-to-deb/cl-local-time_20150407-1.dsc 60B1CB4E
 signfile /tmp/ql-to-deb/cl-local-time_20150407-1_amd64.changes 60B1CB4E
Successfully signed dsc and changes files
 signfile /tmp/ql-to-deb/cl-lparallel_20150302-1.dsc 60B1CB4E
 signfile /tmp/ql-to-deb/cl-lparallel_20150302-1_amd64.changes 60B1CB4E
Successfully signed dsc and changes files
 signfile /tmp/ql-to-deb/cl-nibbles_20150407-1.dsc 60B1CB4E
 signfile /tmp/ql-to-deb/cl-nibbles_20150407-1_amd64.changes 60B1CB4E
Successfully signed dsc and changes files
 signfile /tmp/ql-to-deb/cl-qmynd_20150302-1.dsc 60B1CB4E
 signfile /tmp/ql-to-deb/cl-qmynd_20150302-1_amd64.changes 60B1CB4E
Successfully signed dsc and changes files
 signfile /tmp/ql-to-deb/cl-trivial-backtrace_20150407-1.dsc 60B1CB4E
 signfile /tmp/ql-to-deb/cl-trivial-backtrace_20150407-1_amd64.changes 60B1CB4E
Successfully signed dsc and changes files
Ok, with all tested and signed, it's time we upload our packages on debian servers for our dear debian users to be able to use newer and better versions of their beloved Common Lisp librairies:
$ /vagrant/build/bin/ql-to-deb upload
Trying to upload package to ftp-master (ftp.upload.debian.org)
Checking signature on .changes
gpg: Signature made Sat 02 May 2015 05:06:48 PM MSK using RSA key ID 60B1CB4E
gpg: Good signature from "Dimitri Fontaine <dim@tapoueh.org>"
Good signature on /tmp/ql-to-deb/cl-plus-ssl_20150302-1_amd64.changes.
Checking signature on .dsc
gpg: Signature made Sat 02 May 2015 05:06:46 PM MSK using RSA key ID 60B1CB4E
gpg: Good signature from "Dimitri Fontaine <dim@tapoueh.org>"
Good signature on /tmp/ql-to-deb/cl-plus-ssl_20150302-1.dsc.
Uploading to ftp-master (via ftp to ftp.upload.debian.org):
  Uploading cl-plus-ssl_20150302-1.dsc: done.
  Uploading cl-plus-ssl_20150302.orig.tar.gz: done.
  Uploading cl-plus-ssl_20150302-1.debian.tar.xz: done.
  Uploading cl-plus-ssl_20150302-1_all.deb: done.
  Uploading cl-plus-ssl_20150302-1_amd64.changes: done.
Successfully uploaded packages.
Of course the same text, or thereabouts, is then repeated for all the other packages. Enjoy using Common Lisp in debian! Oh and remember, the only reason I've written ql-to-deb and signed myself up to maintain those umpteen Common Lisp libraries as debian packages is to be able to properly package pgloader in debian, as you can see at https://packages.debian.org/sid/pgloader and in particular in the Other Packages Related to pgloader section of the debian source package for pgloader at https://packages.debian.org/source/sid/pgloader. That level of effort is done to ensure that we respect the Debian Social Contract, wherein debian ensures its users that it's possible to rebuild anything from sources as found in the debian repositories.

26 April 2015

Petter Reinholdtsen: First Jessie based Debian Edu beta release

I am happy to report that the Debian Edu team sent out this announcement today:
the Debian Edu / Skolelinux project is pleased to announce the first
*beta* release of Debian Edu "Jessie" 8.0+edu0~b1, which for the first
time is composed entirely of packages from the current Debian stable
release, Debian 8 "Jessie".
(As most reading this will know, Debian "Jessie" hasn't actually been
released by now. The release is still in progress but should finish
later today ;)
We expect to make a final release of Debian Edu "Jessie" in the coming
weeks, timed with the first point release of Debian Jessie. Upgrades
from this beta release of Debian Edu Jessie to the final release will
be possible and encouraged!
Please report feedback to debian-edu@lists.debian.org and/or submit
bugs: http://wiki.debian.org/DebianEdu/HowTo/ReportBugs
Debian Edu - sometimes also known as "Skolelinux" - is a complete
operating system for schools, universities and other
organisations. Through its pre-prepared installation profiles
administrators can install servers, workstations and laptops which
will work in harmony on the school network.  With Debian Edu, the
teachers themselves or their technical support staff can roll out a
complete multi-user, multi-machine study environment within hours or
days.
Debian Edu is already in use at several hundred schools all over the
world, particularly in Germany, Spain and Norway. Installations come
with hundreds of applications pre-installed, plus the whole Debian
archive of thousands of compatible packages within easy reach.
For those who want to give Debian Edu Jessie a try, download and
installation instructions are available, including detailed
instructions in the manual explaining the first steps, such as setting
up a network or adding users.  Please note that the password for the
user you are prompted for during installation must be at least
5 characters long!
== Where to download ==
A multi-architecture CD / usbstick image (649 MiB) for network booting
can be downloaded at the following locations:
    http://ftp.skolelinux.org/skolelinux-cd/debian-edu-8.0+edu0~b1-CD.iso
    rsync -avzP ftp.skolelinux.org::skolelinux-cd/debian-edu-8.0+edu0~b1-CD.iso . 
The SHA1SUM of this image is: 54a524d16246cddd8d2cfd6ea52f2dd78c47ee0a
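If you want to double-check the download, the SHA1 can be recomputed locally; a minimal sketch, run in the directory holding the image:

$ sha1sum debian-edu-8.0+edu0~b1-CD.iso
54a524d16246cddd8d2cfd6ea52f2dd78c47ee0a  debian-edu-8.0+edu0~b1-CD.iso

The printed checksum should match the value quoted above exactly; if it does not, the download is corrupt.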
Alternatively an extended DVD / usbstick image (4.9 GiB) is also
available, with more software included (saving additional download
time):
    http://ftp.skolelinux.org/skolelinux-cd/debian-edu-8.0+edu0~b1-USB.iso
    rsync -avzP ftp.skolelinux.org::skolelinux-cd/debian-edu-8.0+edu0~b1-USB.iso .
The SHA1SUM of this image is: fb1f1504a490c077a48653898f9d6a461cb3c636
Sources are available from the Debian archive, see
http://ftp.debian.org/debian-cd/8.0.0/source/ for some download
options.
== Debian Edu Jessie manual in seven languages ==
Please see https://wiki.debian.org/DebianEdu/Documentation/Jessie/ for
the English version of the Debian Edu jessie manual.
This manual has been fully translated to German, French, Italian,
Danish, Dutch and Norwegian Bokmål. A partly translated version exists
for Spanish.  See http://maintainer.skolelinux.org/debian-edu-doc/ for
online version of the translated manual.
More information about Debian 8 "Jessie" itself is provided in the
release notes and the installation manual:
- http://www.debian.org/releases/jessie/releasenotes
- http://www.debian.org/releases/jessie/installmanual
== Errata / known problems ==
    It takes up to 15 minutes for a changed hostname to be updated via
    DHCP (#780461).
    The hostname script fails to update LTSP server hostname (#783087). 
Workaround: run update-hostname-from-ip on the client to update the
hostname immediately.
Check https://wiki.debian.org/DebianEdu/Status/Jessie for a possibly
more current and complete list.
== Some more details about Debian Edu 8.0+edu0~b1 Codename Jessie released 2015-04-25 ==
=== Software updates ===
Everything which is new in Debian 8 Jessie, e.g.:
 * Linux kernel 3.16.7-ckt9; for the i386 architecture, support for
   i486 processors has been dropped; oldest supported ones: i586 (like
   Intel Pentium and AMD K5).
 * Desktop environments KDE Plasma Workspaces 4.11.13, GNOME 3.14,
   Xfce 4.12, LXDE 0.5.6
   * new optional desktop environment: MATE 1.8
   * KDE Plasma Workspaces is installed by default; to choose one of
     the others see the manual.
 * the browsers Iceweasel 31 ESR and Chromium 41
 * LibreOffice 4.3.3
 * GOsa 2.7.4
 * LTSP 5.5.4
 * CUPS print system 1.7.5
 * new boot framework: systemd
 * Educational toolbox GCompris 14.12
 * Music creator Rosegarden 14.02
 * Image editor Gimp 2.8.14
 * Virtual stargazer Stellarium 0.13.1
 * golearn 0.9
 * tuxpaint 0.9.22
 * New version of debian-installer from Debian Jessie.
 * Debian Jessie includes about 43000 packages available for installation.
 * More information about Debian 8 Jessie is provided in its release
   notes and the installation manual, see the link above.
=== Installation changes ===
    Installations done via PXE now also install firmware automatically
    for the hardware present.
=== Fixed bugs ===
A number of bugs have been fixed in this release; the most noticeable
from a user perspective:
 * Inserting incorrect DNS information in Gosa will no longer break
   DNS completely, but instead stop DNS updates until the incorrect
   information is corrected (710362)
 * shutdown-at-night now shuts the system down if gdm3 is used (775608). 
=== Sugar desktop removed ===
As the Sugar desktop was removed from Debian Jessie, it is also not
available in Debian Edu jessie.
== About Debian Edu / Skolelinux ==
Debian Edu, also known as Skolelinux, is a Linux distribution based on
Debian providing an out-of-the-box environment of a completely
configured school network. Directly after installation a school server
running all services needed for a school network is set up just
waiting for users and machines to be added via GOsa², a comfortable
Web-UI. A netbooting environment is prepared using PXE, so after
initial installation of the main server from CD or USB stick all other
machines can be installed via the network. The provided school server
provides LDAP database and Kerberos authentication service,
centralized home directories, DHCP server, web proxy and many other
services.  The desktop contains more than 60 educational software
packages and more are available from the Debian archive, and schools
can choose between KDE, GNOME, LXDE, Xfce and MATE desktop
environment.
== About Debian ==
The Debian Project was founded in 1993 by Ian Murdock to be a truly
free community project. Since then the project has grown to be one of
the largest and most influential open source projects. Thousands of
volunteers from all over the world work together to create and
maintain Debian software. Available in 70 languages, and supporting a
huge range of computer types, Debian calls itself the universal
operating system.
== Thanks ==
Thanks to everyone making Debian and Debian Edu / Skolelinux happen!
You rock.

13 April 2015

Santiago García Mantiñán: haproxy as a very very overloaded sslh

After using haproxy at work for some time I realized that it can be configured for a lot of things. For example, it knows about SNI (on SSL this is the method we use to know which host the client is trying to reach, so that we know which certificate to present and can thus multiplex several virtual hosts on the same SSL IP:port), and it also knows how to make transparent proxy connections (the connections go through haproxy, but the destination server will think they are arriving directly from the client, as it will see the client's IP as the source IP of the packets).
With these two little features, which are available in haproxy 1.5 (Jessie's version has them all), I thought I could try to substitute haproxy for sslh, giving me a lot of possibilities that sslh cannot offer. With this in mind, I thought I could multiplex several SSL services, not only https but also openvpn or similar, on port 443, and also allow these services to reach the final server transparently. Thus what I wanted was not to mimic sslh (which can be done with haproxy) but to get the semantics I needed, which are similar to sslh but with more power and slightly different behaviour, because I liked it that way.
There is however one caveat that I don't like about this setup: to achieve the transparency one has to run haproxy as root, which is not really something one likes :-( So, having transparency is great, but we'll be taking some risks here which I personally don't like; to me it isn't worth it.
Anyway, here is the setup. It basically consists of an haproxy configuration, but if we want transparency we'll have to add a routing and iptables setup to it; I'll describe the whole setup here. This is what you need to define in /etc/haproxy/haproxy.cfg:
frontend ft_ssl
  bind 192.168.0.1:443
  mode tcp
  option tcplog
  tcp-request inspect-delay 5s
  tcp-request content accept if { req_ssl_hello_type 1 }
  acl sslvpn req_ssl_sni -i vpn.example.net
  use_backend bk_sslvpn if sslvpn
  use_backend bk_web if { req_ssl_sni -m found }
  default_backend bk_ssh
backend bk_sslvpn
  mode tcp
  source 0.0.0.0 usesrc clientip
  server srvvpn vpnserver:1194
backend bk_web
  mode tcp
  source 0.0.0.0 usesrc clientip
  server srvhttps webserver:443
backend bk_ssh
  mode tcp
  source 0.0.0.0 usesrc clientip
  server srvssh sshserver:22
An example of a transparent setup can be found here, but it lacks some details; for example, if you need to redirect the traffic to the local haproxy you'll want to use xt_TPROXY, and there is a better doc for that at squid's wiki. Anyway, if you are playing just with your own machine, like we typically do with sslh, you won't need the TPROXY power, as packets will come straight to your 443, so haproxy will be able to get them without any problem. The problem will come if you are using transparency (source 0.0.0.0 usesrc clientip), because then packets coming out of haproxy will carry the IP of the real client, and thus the answers of the backend will go to that client (but with different ports and other TCP data), so it will not work. We'll have to get those packets back to haproxy; for that, what we'll do is mark the packets with iptables and then route them to the loopback interface using advanced routing.
This is where all the examples will tell you to use iptables' mangle table with rules marking on PREROUTING, but that won't work if you have the whole setup (frontend and backends) on just one box; instead you'll have to write those rules to work on the OUTPUT chain of the mangle table, with something like this:
*mangle
:PREROUTING ACCEPT
:INPUT ACCEPT
:FORWARD ACCEPT
:OUTPUT ACCEPT
:POSTROUTING ACCEPT
:DIVERT -
-A OUTPUT -s public_ip -p tcp --sport 22 -o public_iface -j DIVERT
-A OUTPUT -s public_ip -p tcp --sport 443 -o public_iface -j DIVERT
-A OUTPUT -s public_ip -p tcp --sport 1194 -o public_iface -j DIVERT
-A DIVERT -j MARK --set-mark 1
-A DIVERT -j ACCEPT
COMMIT
Take that just as an example; better suggestions on how to know what traffic to send to DIVERT are welcome. The point here is that if you are sending the service to some other box you can do it on PREROUTING, but if you are sending the service to the very same box as haproxy you'll have to mark the packets on the OUTPUT chain.
Once we have the packets marked we just need to route them; something like this will work out perfectly:
ip rule add fwmark 1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100
And that's all for this crazy setup. Of course, if, like me, you don't like the root implication of the transparent setup, you can remove the "source 0.0.0.0 usesrc clientip" lines on the backends and forget about transparency (connections to the backend will come from your local IP), but then you'll be able to run haproxy with dropped privileges and you'll only need the plain haproxy.cfg setup, not the weird iptables and advanced routing setup.
Hope you like the article. Btw, I'd like to point out the main difference of this setup vs sslh: I'm only sending the packets to the SSL backends if the client sends SNI info, and otherwise I'm sending them to the ssh server, while sslh will also send SSL clients without SNI to the SSL backend. If your setup mimics sslh and you want to comment on it, feel free to do so.
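As a quick smoke test of the SNI-based routing described above, you can make connections to port 443 with and without a server name and see which backend answers. A minimal sketch, assuming the frontend address and the vpn.example.net ACL from the configuration above ("user" is just a placeholder login):

$ # SNI matching the acl: should be routed to bk_sslvpn
$ openssl s_client -connect 192.168.0.1:443 -servername vpn.example.net
$ # any other SNI: should end up on bk_web
$ openssl s_client -connect 192.168.0.1:443 -servername www.example.net
$ # no TLS hello at all: falls through to bk_ssh after the 5s inspect delay
$ ssh -p 443 user@192.168.0.1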

4 December 2014

Chris Lamb: Don't ask your questions in private

(If I've linked you to this page, it is my feeble attempt to provide a more convincing justification.)


I often receive instant messages or emails requesting help or guidance at work or on one of my various programming projects. When asked why they asked privately, the responses vary, mostly along the lines of it simply being an accident, not knowing where else to ask, or not wishing to "disturb" others with their bespoke question. Some will be more candid and simply admit that they were afraid of looking unknowledgeable in front of others. It is always tempting to simply reply with the answer, especially as helping another human is inherently rewarding, unless one is a psychopath. However, one can actually do more good overall by insisting that the question be re-asked in a more public forum.
This is for many reasons. Most obviously, public questions are simply far more efficient: as soon as more than one person asks that question, the response can be found in a search engine or linked to in the future. These time savings soon add up, meaning that simply more stuff can be done in any given day. After all, most questions are not as unique as people think. Secondly, a private communication cannot be corrected or elaborated on if someone else notices it is incorrect or incomplete. Even this rather banal point is more subtle than it first appears: the lack of possible corrections deprives both the person asking and the person responding of the true and correct answer. Lastly, conversations that happen in private deprive others of the answer as well. Perhaps someone was curious but hadn't got around to asking? Maybe the answer (or even the question!) contains a clue to solving some other issue. None of this can happen if it occurs behind closed doors. (There are lots of subtler reasons too: in a large organisation or team, simply knowing what other people are curious about can be curiously valuable information.)
Note that this is not, as you might immediately suspect, simply a way of ensuring that one gets the public recognition or "kudos" from being seen helping others. I wouldn't deny that technical communities work on a gift-economy basis to some degree, but to dismiss all acts of assistance as "selfish" and value-extracting would be to take the argument too far in the other direction. That said, the lure and appeal of public recognition should not be understated and can certainly provide an incentive to elaborate and provide a generally superior response.

More philosophically, there's also something fundamentally "honest" about airing issues in an appropriately public and transparent manner. I feel it promotes a culture of egoless conversations, of being able to admit one's mistakes and ultimately a healthy personal mindset. So please, take care not only in the way you phrase and frame your question, but also consider wider context in which you are asking it. And don't take it too personally if I ask you to re-ask elsewhere...

27 October 2014

Petter Reinholdtsen: First Jessie based Debian Edu released (alpha0)

I am happy to report that I on behalf of the Debian Edu team just sent out this announcement:
The Debian Edu Team is pleased to announce the release of Debian Edu
Jessie 8.0+edu0~alpha0
Debian Edu is a complete operating system for schools. Through its
various installation profiles you can install servers, workstations
and laptops which will work together on the school network. With
Debian Edu, the teachers themselves or their technical support can
roll out a complete multi-user multi-machine study environment within
hours or a few days. Debian Edu comes with hundreds of applications
pre-installed, but you can always add more packages from Debian.
For those who want to give Debian Edu Jessie a try, download and
installation instructions are available, including detailed
instructions in the manual[1] explaining the first steps, such as
setting up a network or adding users. Please note that the password
for the user you are prompted for during installation must be at
least 5 characters long!
 [1] <URL: https://wiki.debian.org/DebianEdu/Documentation/Jessie >
Would you like to give your school's computer a longer life? Are you
tired of sneaker administration, running from computer to computer
reinstalling the operating system? Would you like to administrate all
the computers in your school using only a couple of hours every week?
Check out Debian Edu Jessie!
Skolelinux is used by at least two hundred schools all over the world,
mostly in Germany and Norway.
About Debian Edu and Skolelinux
===============================
Debian Edu, also known as Skolelinux[2], is a Linux distribution based
on Debian providing an out-of-the-box environment of a completely
configured school network. Immediately after installation a school
server running all services needed for a school network is set up just
waiting for users and machines to be added via GOsa², a comfortable
Web-UI. A netbooting environment is prepared using PXE, so after
initial installation of the main server from CD or USB stick all other
machines can be installed via the network.  The provided school server
provides LDAP database and Kerberos authentication service,
centralized home directories, DHCP server, web proxy and many other
services.  The desktop contains more than 60 educational software
packages[3] and more are available from the Debian archive, and
schools can choose between KDE, Gnome, LXDE, Xfce and MATE desktop
environment.
 [2] <URL: http://www.skolelinux.org/ >
 [3] <URL: http://people.skolelinux.org/pere/blog/Educational_applications_included_in_Debian_Edu___Skolelinux__the_screenshot_collection____.html >
Full release notes and manual
=============================
Below the download URLs there is a list of some of the new features
and bugfixes of Debian Edu 8.0+edu0~alpha0 Codename Jessie. The full
list is part of the manual. (See the feature list in the manual[4] for
the English version.) For some languages manual translations are
available, see the manual translation overview[5].
 [4] <URL: https://wiki.debian.org/DebianEdu/Documentation/Jessie/Features >
 [5] <URL: http://maintainer.skolelinux.org/debian-edu-doc/ >
Where to get it
---------------
To download the multiarch netinstall CD release (624 MiB) you can use
 * ftp://ftp.skolelinux.org/skolelinux-cd/debian-edu-8.0+edu0~alpha0-CD.iso
 * http://ftp.skolelinux.org/skolelinux-cd/debian-edu-8.0+edu0~alpha0-CD.iso
 * rsync -avzP ftp.skolelinux.org::skolelinux-cd/debian-edu-8.0+edu0~alpha0-CD.iso .
The SHA1SUM of this image is: 361188818e036ce67280a572f757de82ebfeb095
New features for Debian Edu 8.0+edu0~alpha0 Codename Jessie released 2014-10-27
===============================================================================
Installation changes
--------------------
 * PXE installation now installs firmware automatically for the hardware present.
Software updates
----------------
Everything which is new in Debian Jessie 8.0, e.g.:
 * Linux kernel 3.16.x
 * Desktop environments KDE "Plasma" 4.11.12, GNOME 3.14, Xfce 4.10,
   LXDE 0.5.6 and MATE 1.8 (KDE "Plasma" is installed by default; to
   choose one of the others see manual.)
 * the browsers Iceweasel 31 ESR and Chromium 38 
 * LibreOffice 4.3.3
 * GOsa 2.7.4
 * LTSP 5.5.4
 * CUPS print system 1.7.5
 * new boot framework: systemd
 * Educational toolbox GCompris 14.07 
 * Music creator Rosegarden 14.02
 * Image editor Gimp 2.8.14
 * Virtual stargazer Stellarium 0.13.0
 * golearn 0.9
 * tuxpaint 0.9.22
 * New version of debian-installer from Debian Jessie.
 * Debian Jessie includes about 42000 packages available for
   installation.
 * More information about Debian Jessie 8.0 is provided in the release
   notes[6] and the installation manual[7].
 [6] <URL: http://www.debian.org/releases/jessie/releasenotes >
 [7] <URL: http://www.debian.org/releases/jessie/installmanual >
Fixed bugs
----------
 * Inserting incorrect DNS information in Gosa will no longer break
   DNS completely, but instead stop DNS updates until the incorrect
   information is corrected (Debian bug #710362)
 * and many others.
Documentation and translation updates
------------------------------------- 
 * The Debian Edu Jessie Manual is fully translated to German, French,
   Italian, Danish and Dutch. Partly translated versions exist for
   Norwegian Bokmål and Spanish.
Other changes
-------------
 * Due to new Squid settings, powering off or rebooting the main
   server takes more time.
 * To manage printers localhost:631 has to be used, currently www:631
   doesn't work.
Regressions / known problems
----------------------------
 * Installing LTSP chroot fails with a bug related to eatmydata about
   exim4-config failing to run its postinst (see Debian bug #765694
   and Debian bug #762103).
 * Munin collection is not properly configured on clients (Debian bug
   #764594).  The fix is available in a newer version of munin-node.
 * PXE setup for Main Server and Thin Client Server setup does not
   work when installing on a machine without direct Internet access.
   Will be fixed when Debian bug #766960 is fixed in Jessie.
See the status page[8] for the complete list.
 [8] <URL: https://wiki.debian.org/DebianEdu/Status/Jessie >
How to report bugs
------------------
<URL: http://wiki.debian.org/DebianEdu/HowTo/ReportBugs >
About Debian
============
The Debian Project was founded in 1993 by Ian Murdock to be a truly
free community project. Since then the project has grown to be one of
the largest and most influential open source projects. Thousands of
volunteers from all over the world work together to create and
maintain Debian software. Available in 70 languages, and supporting a
huge range of computer types, Debian calls itself the universal
operating system.
Contact Information
For further information, please visit the Debian web pages[9] or send
mail to press@debian.org.
 [9] <URL: http://www.debian.org/ >

14 October 2014

Julian Andres Klode: Key transition

I started transitioning from 1024D to 4096R. The new key is available at: https://people.debian.org/~jak/pubkey.gpg and the keys.gnupg.net key server. A very short transition statement is available at: https://people.debian.org/~jak/transition-statement.txt and included below (the http version might get extended over time if needed). The key consists of one master key and 3 sub keys (signing, encryption, authentication). The sub keys are stored on an OpenPGP v2 Smartcard. That's really cool, isn't it? Somehow it seems that GnuPG 1.4.18 also works with 4096R keys on this smartcard (I accidentally used it instead of gpg2 and it worked fine), although only GPG 2.0.13 and newer is supposed to work.
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1,SHA512
Because 1024D keys are not deemed secure enough anymore, I switched to
a 4096R one.
The old key will continue to be valid for some time, but i prefer all
future correspondence to come to the new one.  I would also like this
new key to be re-integrated into the web of trust.  This message is
signed by both keys to certify the transition.
the old key was:
pub   1024D/00823EC2 2007-04-12
      Key fingerprint = D9D9 754A 4BBA 2E7D 0A0A  C024 AC2A 5FFE 0082 3EC2
And the new key is:
pub   4096R/6B031B00 2014-10-14 [expires: 2017-10-13]
      Key fingerprint = AEE1 C8AA AAF0 B768 4019  C546 021B 361B 6B03 1B00
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2
iEYEARECAAYFAlQ9j+oACgkQrCpf/gCCPsKskgCgiRn7DoP5RASkaZZjpop9P8aG
zhgAnjHeE8BXvTSkr7hccNb2tZsnqlTaiQIcBAEBCgAGBQJUPY/qAAoJENc8OeVl
gLOGZiMP/1MHubKmA8aGDj8Ow5Uo4lkzp+A89vJqgbm9bjVrfjDHZQIdebYfWrjr
RQzXdbIHnILYnUfYaOHUzMxpBHya3rFu6xbfKesR+jzQf8gxFXoBY7OQVL4Ycyss
4Y++g9m4Lqm+IDyIhhDNY6mtFU9e3CkljI52p/CIqM7eUyBfyRJDRfeh6c40Pfx2
AlNyFe+9JzYG1i3YG96Z8bKiVK5GpvyKWiggo08r3oqGvWyROYY9E4nLM9OJu8EL
GuSNDCRJOhfnegWqKq+BRZUXA2wbTG0f8AxAuetdo6MKmVmHGcHxpIGFHqxO1QhV
VM7VpMj+bxcevJ50BO5kylRrptlUugTaJ6il/o5sfgy1FdXGlgWCsIwmja2Z/fQr
ycnqrtMVVYfln9IwDODItHx3hSwRoHnUxLWq8yY8gyx+//geZ0BROonXVy1YEo9a
PDplOF1HKlaFAHv+Zq8wDWT8Lt1H2EecRFN+hov3+lU74ylnogZLS+bA7tqrjig0
bZfCo7i9Z7ag4GvLWY5PvN4fbws/5Yz9L8I4CnrqCUtzJg4vyA44Kpo8iuQsIrhz
CKDnsoehxS95YjiJcbL0Y63Ed4mkSaibUKfoYObv/k61XmBCNkmNAAuRwzV7d5q2
/w3bSTB0O7FHcCxFDnn+tiLwgiTEQDYAP9nN97uibSUCbf98wl3/
=VRZJ
-----END PGP SIGNATURE-----
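If you want to check the transition yourself, fetching the new key and verifying the signed statement goes roughly like this; a minimal sketch using the key ID, keyserver and URL quoted above (exact keyserver options may vary with your GnuPG version):

$ # fetch the new key from the keyserver mentioned in the post
$ gpg --keyserver keys.gnupg.net --recv-keys 0x6B031B00
$ # download the transition statement and check the signatures from both keys
$ wget https://people.debian.org/~jak/transition-statement.txt
$ gpg --verify transition-statement.txt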

Filed under: Uncategorized

1 March 2014

Michael Prokop: Jenkins on-demand slave selection through labels

Problem description: One of my customers had a problem with their Selenium tests in the Jenkins continuous integration system. While Perl's Test::WebDriver still worked just fine, the Selenium tests using Ruby's selenium-webdriver suddenly reported failures. The problem was caused by Debian wheezy's upgrade of the Iceweasel web browser. Debian originally shipped Iceweasel version 17.0.10esr-1~deb7u1 in wheezy, but during a security update version 24.3.0esr-1~deb7u1 was brought in through the wheezy-security channel. Because the Selenium tests are used in an automated fashion in a quite large and long-running build pipeline, we immediately rolled back to Iceweasel version 17.0.10esr-1~deb7u1 so everything could continue as expected. Of course we wanted to get the new Iceweasel version up and running, but we didn't want to break the existing workflow while working on it. This is where on-demand slave selection through labels comes in.
Basics: As soon as you're using Jenkins slaves you can instruct Jenkins to run a specific project on a particular (slave) node. By attaching labels to your slaves you can also use a label instead of a specific node name, providing more flexibility and scalability (to e.g. avoid problems if a specific node is down, or to scale to more systems). Jenkins then decides which of the nodes providing the corresponding label should be considered for job execution. In the following screenshot a job uses the selenium label to restrict its execution to the slaves providing selenium, and currently there are two nodes available providing this label: TIP 1: Visiting $JENKINS_SERVER/label/$label/ provides a list of slaves that provide that given $label (as well as a list of projects that use $label in their configuration), like:

TIP 2: Execute the following script on $JENKINS_SERVER/script to get a list of available labels of your Jenkins system:
import hudson.model.*
labels = Hudson.instance.getLabels()
labels.each { label -> println label.name }
Solution: In this customer's setup we're using the swarm plugin (with automated Debian deployment through Grml's netscript boot option, grml-debootstrap + Puppet) to automatically connect our Jenkins slaves to the Jenkins master without any manual intervention. The swarm plugin allows you to define the labels through the -labels command line option. By using the NodeLabel Parameter plugin we can configure additional parameters in Jenkins jobs: "node" and "label". The label parameter allows us to execute the jobs on the nodes providing the requested label. This is what we can use to gradually upgrade from the old Iceweasel version to the new one, by keeping a given set of slaves at the old Iceweasel version while we're upgrading other nodes to the new Iceweasel version (same for the selenium-server version, which we also want to control). We can include the version number of the Iceweasel and selenium-server packages inside the labels we announce through the swarm slaves, with something like:
if [ -r /etc/init.d/selenium-server ] ; then
  FLAGS="selenium"
  ICEWEASEL_VERSION="$(dpkg-query --show --showformat='${Version}' iceweasel)"
  if [ -n "$ICEWEASEL_VERSION" ] ; then
    ICEWEASEL_FLAG="iceweasel-${ICEWEASEL_VERSION%%.*}"
    EXTRA_FLAGS="$EXTRA_FLAGS $ICEWEASEL_FLAG"
  fi
  SELENIUM_VERSION="$(dpkg-query --show --showformat='${Version}' selenium-server)"
  if [ -n "$SELENIUM_VERSION" ] ; then
    SELENIUM_FLAG="selenium-${SELENIUM_VERSION%-*}"
    EXTRA_FLAGS="$EXTRA_FLAGS $SELENIUM_FLAG"
  fi
fi
Then by using -labels "$FLAGS $EXTRA_FLAGS" in the swarm invocation script we end up with labels like selenium iceweasel-24 selenium-2.40.0 for the slaves providing the Iceweasel v24 and selenium v2.40.0 Debian packages, and selenium iceweasel-17 selenium-2.40.0 for the slaves providing Iceweasel v17 and selenium v2.40.0. This is perfect for our needs, because instead of using the selenium label (which is still there) we can now configure the selenium jobs that should continue to work as usual to default to the slaves with the iceweasel-17 label. The development-related jobs, though, can use the label iceweasel-24 and fail as often as needed without interrupting the build pipeline used for production. To illustrate this, here we have slave selenium-client2 providing Iceweasel v17 with selenium-server v2.40. When triggering the production selenium job it will get executed on selenium-client2, because that's the slave providing the requested labels: Whereas the development selenium job can point to the slaves providing Iceweasel v24, so it will be executed on slave selenium-client1 here: This setup allowed us to work on the selenium Ruby tests without conflicting with any production build pipeline. By the time I'm writing about this setup we've already finished the migration to support Iceweasel v24, and the infrastructure is ready for further Iceweasel and selenium-server upgrades.
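For illustration, a swarm slave announcing those labels might be started with something along these lines; this is a sketch only, the jar location and master URL are placeholders, and authentication options are omitted:

# announce this node to the Jenkins master with the labels computed above
java -jar swarm-client.jar \
  -master https://jenkins.example.com/ \
  -labels "$FLAGS $EXTRA_FLAGS"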

Ben Armstrong: Invisible CSS animations on Iceweasel consuming CPU

Thanks to bernat on #debian @ irc.debian.org for helping me track down this bug and devise a workaround. When working on my wife's netbook, I noticed that when idling on Facebook in iceweasel 24.3.0esr-1, the process was taking far too much CPU. I then retested on a wheezy system with the release iceweasel from mozilla.d.n, which at that time was 26, and I later upgraded to 27. Same problem there, too, on both versions. In fact, it seems the slowdown was amplified by the fact that I was running iceweasel in vnc4server, not the world's most efficient X implementation. Even with all these versions tested, I have yet to file a Debian bug, as I will need some time on a system where the slowdown is noticeable and I'm using a current Debian version. But I wanted to post now to give props to bernat for his help. If you think you have this issue, go read his article linked above, which contains the workaround.
